This work introduces a diffusion model for 3D molecule generation that is equivariant to Euclidean transformations. Our E(3) Equivariant Diffusion Model (EDM) learns to denoise a diffusion process with an equivariant network that operates jointly on continuous features (atom coordinates) and categorical features (atom types). In addition, we provide a probabilistic analysis that admits likelihood computation of molecules using our model. Experimentally, the proposed method significantly outperforms previous 3D molecular generation methods in the quality of generated samples and in training-time efficiency.
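A minimal sketch of the joint forward-noising step this abstract describes, not the authors' code: coordinates are diffused in the zero-center-of-mass subspace so the process stays translation invariant, while one-hot atom types are diffused as ordinary continuous features. The cosine schedule and all shapes are illustrative assumptions.

```python
import numpy as np

def remove_mean(x):
    """Project coordinates onto the zero center-of-mass subspace."""
    return x - x.mean(axis=0, keepdims=True)

def noise_step(coords, atom_onehot, t, T=1000):
    """Sample z_t = alpha_t * x + sigma_t * eps for both feature types."""
    alpha = np.cos(0.5 * np.pi * t / T)     # illustrative cosine schedule
    sigma = np.sqrt(1.0 - alpha ** 2)
    eps_x = remove_mean(np.random.randn(*coords.shape))  # CoM-free noise
    eps_h = np.random.randn(*atom_onehot.shape)
    z_x = alpha * remove_mean(coords) + sigma * eps_x
    z_h = alpha * atom_onehot + sigma * eps_h
    return z_x, z_h, eps_x, eps_h  # the equivariant net learns to predict eps

# Toy molecule: 3 atoms, 3D coordinates, 5 possible atom types.
coords = np.random.randn(3, 3)
types = np.eye(5)[[0, 1, 1]]
z_x, z_h, _, _ = noise_step(coords, types, t=500)
```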
This paper introduces a generative model equivariant to Euclidean symmetries: E(n) Equivariant Normalizing Flows (E-NFs). To construct E-NFs, we take discriminative E(n) graph neural networks and integrate them as a differential equation, obtaining an invertible equivariant function: a continuous-time normalizing flow. We show that E-NFs considerably outperform baselines and existing methods from the literature on particle systems such as DW4 and LJ13, and on molecules from QM9, in terms of log-likelihood. To our knowledge, this is the first flow that jointly generates molecular features and positions in 3D.
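A minimal sketch of the continuous-time normalizing-flow mechanics the paper builds on: integrate dx/dt = f(x) while accumulating the instantaneous change of variables, d log p / dt = -tr(df/dx). Here f is a toy linear field rather than an E(n) GNN, so the trace is exact; everything about the dynamics is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
W = 0.1 * rng.normal(size=(3, 3))        # toy linear dynamics f(x) = x @ W.T

def integrate(x, steps=100, t1=1.0):
    """Euler-integrate dx/dt = f(x) and the log-density correction."""
    dt = t1 / steps
    int_trace = 0.0
    for _ in range(steps):
        x = x + dt * (x @ W.T)
        int_trace += dt * np.trace(W)    # tr(df/dx) is constant for linear f
    return x, int_trace

# Map a data point to the Gaussian base distribution; then
# log p(x0) = log N(x1; 0, I) + integral of the trace term.
x0 = rng.normal(size=(1, 3))
x1, int_trace = integrate(x0)
log_prior = -0.5 * (x1 ** 2).sum() - 0.5 * x1.size * np.log(2 * np.pi)
print(log_prior + int_trace)
```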
We consider the problem of estimating the interacting neighborhood of a Markov Random Field model with finite support and homogeneous pairwise interactions based on relative positions of a two-dimensional lattice. Using a Bayesian framework, we propose a Reversible Jump Markov Chain Monte Carlo (RJMCMC) algorithm that jumps across subsets of a maximal-range neighborhood, allowing us to perform model selection based on a marginal pseudoposterior distribution of models. To show the strength of our proposed methodology, we perform a simulation study and apply it to a real dataset from discrete texture image analysis.
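An illustrative sketch of a trans-model move of the kind this abstract describes, not the authors' algorithm: birth/death toggles over neighborhood subsets of a binary MRF, scored by an Ising-type pseudolikelihood. The interaction strength `beta` is held fixed here, so the reversible jump reduces to plain Metropolis-Hastings over models; the full method would also sample the interaction parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(32, 32)) * 2 - 1    # toy +/-1 lattice sample
CANDIDATES = [(0, 1), (1, 0), (1, 1), (1, -1)]   # maximal-range relations
beta = 0.3                                       # fixed interaction strength

def log_pseudolik(model):
    """Sum over sites of log P(x_s | neighbors) under the Ising model."""
    field = np.zeros_like(X, dtype=float)
    for (di, dj) in model:
        field += np.roll(X, (di, dj), axis=(0, 1))
        field += np.roll(X, (-di, -dj), axis=(0, 1))
    return np.sum(X * beta * field - np.logaddexp(beta * field, -beta * field))

model = {CANDIDATES[0]}
for _ in range(1000):
    r = CANDIDATES[rng.integers(len(CANDIDATES))]
    proposal = model ^ {r}                       # toggle one relation
    if not proposal:
        continue                                 # reject the empty model
    # Flat prior over models => accept on the pseudolikelihood ratio.
    if np.log(rng.random()) < log_pseudolik(proposal) - log_pseudolik(model):
        model = proposal
print(sorted(model))
```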
Markov chains with variable length are useful parsimonious stochastic models capable of generating most stationary sequences of discrete symbols. The idea is to identify the suffixes of the past, called contexts, that are relevant for predicting the next symbol. Sometimes a single state is a context: seeing that specific state in the past makes everything further back irrelevant. States with this property are called renewal states, and they can be used to split the chain into independent and identically distributed blocks. To identify renewal states in chains with variable length, we propose using the Intrinsic Bayes Factor to evaluate the hypothesis that a given state is a renewal state. The difficulty lies in integrating the marginal posterior distribution of the random context trees over a general prior distribution on context trees and a prior on the transition probabilities; Monte Carlo methods are applied for this task. To demonstrate the strength of our method, we analyze artificial datasets generated from different binary models and an example from the field of linguistics.
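A minimal sketch of the block decomposition the abstract refers to: once a renewal state `w` has been identified, every occurrence of `w` cuts the chain into blocks that are independent and identically distributed. The function and example sequence are illustrative.

```python
def split_at_renewal_state(sequence, w):
    """Split `sequence` into blocks, each ending at an occurrence of w."""
    blocks, current = [], []
    for symbol in sequence:
        current.append(symbol)
        if symbol == w:
            blocks.append(current)
            current = []
    if current:            # trailing partial block (not i.i.d. with the rest)
        blocks.append(current)
    return blocks

print(split_at_renewal_state([0, 1, 1, 0, 1, 0, 0, 1], w=0))
# [[0], [1, 1, 0], [1, 0], [0], [1]]
```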
High levels of missing data and strong class imbalance are ubiquitous challenges that often occur simultaneously in real-world time series data. Existing methods approach these problems separately, frequently making significant assumptions about the underlying data generation process in order to lessen the impact of the missing information. In this work, we instead demonstrate how a general self-supervised training method, Autoregressive Predictive Coding (APC), can be used to overcome both missing data and class imbalance simultaneously without strong assumptions. Specifically, on a synthetic dataset we show that standard baselines are substantially improved by using APC, with the largest gains in the combined setting of high missingness and severe class imbalance. We further apply APC to two real-world medical time series datasets and show that APC improves classification performance in all settings, ultimately achieving state-of-the-art AUPRC results on the Physionet benchmark.
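A minimal sketch of APC-style pretraining under missingness, illustrative rather than the paper's code: an RNN encoder is trained to predict the input `shift` steps ahead, and the reconstruction loss is masked so only observed entries contribute, letting missing (zero-imputed) values carry no gradient. Architecture and sizes are assumptions.

```python
import torch
import torch.nn as nn

class APC(nn.Module):
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_features)

    def forward(self, x):
        h, _ = self.rnn(x)
        return self.head(h)

def apc_loss(model, x, observed_mask, shift=1):
    pred = model(x)[:, :-shift]          # h_t predicts the frame at t+shift
    target = x[:, shift:]
    mask = observed_mask[:, shift:]
    return ((pred - target) ** 2 * mask).sum() / mask.sum().clamp(min=1)

# Toy batch: 8 series, 20 steps, 5 features, ~30% missing (zero-imputed).
x = torch.randn(8, 20, 5)
mask = (torch.rand(8, 20, 5) > 0.3).float()
model = APC(n_features=5)
loss = apc_loss(model, x * mask, mask)
loss.backward()
```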
Selecting the number of topics in LDA models is considered to be a difficult task, for which alternative approaches have been proposed. The performance of the recently developed singular Bayesian information criterion (sBIC) is evaluated and compared to the performance of alternative model selection criteria. The sBIC is a generalization of the standard BIC that can be applied to singular statistical models. The comparison is based on Monte Carlo simulations and carried out for several alternative settings, varying with respect to the number of topics, the number of documents and the size of documents in the corpora. Performance is measured using different criteria that account not only for the correct number of topics, but also for whether the relevant topics from the data generating processes (DGPs) are identified. Practical recommendations for LDA model selection in applications are derived.
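A minimal sketch of the model-selection loop being evaluated, under stated assumptions: standard BIC penalizes with (d/2) log n, where d counts free parameters, while the sBIC replaces d/2 with a bound on the learning coefficient appropriate for singular models such as LDA. The parameter count and the fitted log-likelihoods below are illustrative placeholders to be supplied by an actual LDA implementation.

```python
import math

def bic(loglik, n_params, n_obs):
    return loglik - 0.5 * n_params * math.log(n_obs)

def select_n_topics(loglik_by_k, vocab_size, n_docs, n_obs):
    """Pick K maximizing BIC; loglik_by_k maps K -> fitted log-likelihood."""
    def n_params(k):                 # topic-word plus doc-topic parameters
        return k * (vocab_size - 1) + n_docs * (k - 1)
    return max(loglik_by_k,
               key=lambda k: bic(loglik_by_k[k], n_params(k), n_obs))

# Hypothetical fitted log-likelihoods for K = 2..5 on a toy corpus.
print(select_n_topics({2: -51200.0, 3: -50100.0, 4: -49950.0, 5: -49900.0},
                      vocab_size=1000, n_docs=200, n_obs=40000))
```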
Applying deep learning concepts from image detection and graph theory has greatly advanced protein-ligand binding affinity prediction, a challenge with enormous ramifications for both drug discovery and protein engineering. We build upon these advances by designing a novel deep learning architecture consisting of a 3-dimensional convolutional neural network utilizing channel-wise attention and two graph convolutional networks utilizing attention-based aggregation of node features. HAC-Net (Hybrid Attention-Based Convolutional Neural Network) obtains state-of-the-art results on the PDBbind v.2016 core set, the most widely recognized benchmark in the field. We extensively assess the generalizability of our model using multiple train-test splits, each of which maximizes differences between either protein structures, protein sequences, or ligand extended-connectivity fingerprints. Furthermore, we perform 10-fold cross-validation with a similarity cutoff between SMILES strings of ligands in the training and test sets, and also evaluate the performance of HAC-Net on lower-quality data. We envision that this model can be extended to a broad range of supervised learning problems related to structure-based biomolecular property prediction. All of our software is available as open source at https://github.com/gregory-kyro/HAC-Net/.
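A minimal sketch of the channel-wise attention idea mentioned in the abstract, in the style of a squeeze-and-excitation block for 3D voxel grids. This is illustrative, not the released HAC-Net code (which is at the repository linked above); channel count, reduction ratio, and input shape are assumptions.

```python
import torch
import torch.nn as nn

class ChannelAttention3D(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)          # squeeze: global context
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                            # x: (B, C, D, H, W)
        w = self.fc(self.pool(x).flatten(1))         # excitation: (B, C)
        return x * w.view(x.size(0), -1, 1, 1, 1)    # reweight channels

x = torch.randn(2, 32, 16, 16, 16)                   # toy voxelized complex
print(ChannelAttention3D(32)(x).shape)               # (2, 32, 16, 16, 16)
```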
Counterfactual explanation is a common class of methods to make local explanations of machine learning decisions. For a given instance, these methods aim to find the smallest modification of feature values that changes the predicted decision made by a machine learning model. One of the challenges of counterfactual explanation is the efficient generation of realistic counterfactuals. To address this challenge, we propose VCNet (Variational Counter Net), a model architecture that combines a jointly trained predictor and counterfactual generator, for regression or classification tasks. VCNet is able both to generate predictions and to generate counterfactual explanations without having to solve another minimisation problem. Our contribution is the generation of counterfactuals that are close to the distribution of the predicted class. This is done by learning a variational autoencoder conditioned on the output of the predictor, in a joint-training fashion. We present an empirical evaluation on tabular datasets and across several interpretability metrics. The results are competitive with the state-of-the-art method.
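A minimal sketch of the joint predictor plus conditional-VAE design the abstract describes, with illustrative shapes and names rather than the authors' code: the VAE is conditioned on the predictor's output, so decoding under a different target class yields a counterfactual close to that class's data distribution, with no per-instance optimization.

```python
import torch
import torch.nn as nn

class VCNetSketch(nn.Module):
    def __init__(self, d_in, n_classes, d_lat=8):
        super().__init__()
        self.predictor = nn.Sequential(nn.Linear(d_in, 32), nn.ReLU(),
                                       nn.Linear(32, n_classes))
        self.enc = nn.Linear(d_in + n_classes, 2 * d_lat)  # -> (mu, logvar)
        self.dec = nn.Linear(d_lat + n_classes, d_in)

    def forward(self, x):                    # joint training: ELBO + CE loss
        p = self.predictor(x).softmax(-1)
        mu, logvar = self.enc(torch.cat([x, p], -1)).chunk(2, -1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        return p, self.dec(torch.cat([z, p], -1)), mu, logvar

    def counterfactual(self, x, target_class):
        p = self.predictor(x).softmax(-1)
        mu, _ = self.enc(torch.cat([x, p], -1)).chunk(2, -1)
        c = torch.zeros_like(p)
        c[..., target_class] = 1.0           # decode under the desired class
        return self.dec(torch.cat([mu, c], -1))

m = VCNetSketch(d_in=10, n_classes=2)
x_cf = m.counterfactual(torch.randn(4, 10), target_class=1)
```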
Despite their impressive performance on diverse tasks, large language models (LMs) still struggle with tasks requiring rich world knowledge, implying the limitations of relying solely on their parameters to encode a wealth of world knowledge. This paper aims to understand LMs' strengths and limitations in memorizing factual knowledge, by conducting large-scale knowledge-probing experiments with 10 models and 4 augmentation methods on PopQA, our new open-domain QA dataset with 14k questions. We find that LMs struggle with less popular factual knowledge, and that scaling fails to appreciably improve memorization of factual knowledge in the tail. We then show that retrieval-augmented LMs largely outperform orders of magnitude larger LMs, while unassisted LMs remain competitive in questions about high-popularity entities. Based on those findings, we devise a simple, yet effective, method for powerful and efficient retrieval-augmented LMs, which retrieves non-parametric memories only when necessary. Experimental results show that this significantly improves models' performance while reducing the inference costs.
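A minimal sketch of the "retrieve only when necessary" decision rule described above: fall back to retrieval for questions about entities below a popularity threshold, and trust the LM's parametric memory otherwise. `popularity`, `lm_answer`, and `retrieval_augmented_answer` are placeholder callables, not a real API, and the threshold is a stand-in for a value tuned on held-out data.

```python
POPULARITY_THRESHOLD = 1000  # assumption: tuned on held-out data

def answer(question, entity, popularity, lm_answer, retrieval_augmented_answer):
    if popularity(entity) < POPULARITY_THRESHOLD:
        return retrieval_augmented_answer(question)  # tail entity: retrieve
    return lm_answer(question)                       # popular entity: skip retrieval

# Usage sketch with stub functions:
print(answer(
    "Who wrote 'The Tartar Steppe'?", "The Tartar Steppe",
    popularity=lambda e: 500,              # e.g. Wikipedia page views
    lm_answer=lambda q: "parametric answer",
    retrieval_augmented_answer=lambda q: "retrieval-augmented answer",
))
```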
We introduce the Conditional Independence Regression CovariancE (CIRCE), a measure of conditional independence for multivariate continuous-valued variables. CIRCE applies as a regularizer in settings where we wish to learn neural features $\varphi(X)$ of data $X$ to estimate a target $Y$, while being conditionally independent of a distractor $Z$ given $Y$. Both $Z$ and $Y$ are assumed to be continuous-valued but relatively low dimensional, whereas $X$ and its features may be complex and high dimensional. Relevant settings include domain-invariant learning, fairness, and causal learning. The procedure requires just a single ridge regression from $Y$ to kernelized features of $Z$, which can be done in advance. It is then only necessary to enforce independence of $\varphi(X)$ from residuals of this regression, which is possible with attractive estimation properties and consistency guarantees. By contrast, earlier measures of conditional feature dependence require multiple regressions for each step of feature learning, resulting in more severe bias and variance, and greater computational cost. When sufficiently rich features are used, we establish that CIRCE is zero if and only if $\varphi(X) \perp \!\!\! \perp Z \mid Y$. In experiments, we show superior performance to previous methods on challenging benchmarks, including learning conditionally invariant image features.
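A minimal numpy sketch of the CIRCE mechanics under simplifying assumptions (the paper uses kernel ridge regression and an HSIC-style statistic; here the regression on $Y$ is linear and the penalty is a plain cross-covariance norm). Step 1 regresses features of $Z$ on $Y$ once, in advance; step 2 penalizes the cross-covariance between learned features $\varphi(X)$ and the regression residuals, which vanishes under the target conditional independence when the features are rich enough.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
Y = rng.normal(size=(n, 1))
Z = Y + 0.1 * rng.normal(size=(n, 1))             # Z depends on X only via Y
phi_X = np.hstack([Y, Y ** 2]) + 0.1 * rng.normal(size=(n, 2))

def rbf_features(z, centers, gamma=1.0):
    """Simple kernel features of Z (stand-in for a richer feature map)."""
    return np.exp(-gamma * (z - centers.T) ** 2)

centers = rng.normal(size=(20, 1))
Psi = rbf_features(Z, centers)                    # (n, 20) features of Z

# Step 1: ridge regression from Y to Psi (done once, ahead of training).
lam = 1e-2
A = np.linalg.solve(Y.T @ Y + lam * np.eye(1), Y.T @ Psi)
resid = Psi - Y @ A

# Step 2: CIRCE-style penalty = squared norm of the cross-covariance.
cov = (phi_X - phi_X.mean(0)).T @ (resid - resid.mean(0)) / n
print((cov ** 2).sum())          # small => consistent with phi(X) ⊥ Z | Y
```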